
CASE STUDY:

RBC Capital Markets Expands Risk Analysis Models for Data Scientists by Scaling to GPUs

Video: RBC & Winning the Roadmap Race: How to Work with Tech Vendors to Get the New Features You Need


The following excerpts are taken from the transcript of a panel discussion with RBC at Launchpad 2020.

Speakers: Manoj Agrawal, RBC; Tina Bustamante, RBC; Ada Mancini, Mirantis.

Why containerize applications?

Manoj Agrawal

Back in 2016, when we started our journey, one of our major focus areas was maturing our DevOps capabilities. What we found was that it was challenging to adopt DevOps across applications of different shapes and sizes and different tech stacks, and as a financial institution, we do have a large presence of retail applications. So making that work was challenging. This is where containers were appealing to us.

In those early days, we started looking at containers as a possible solution to create standardization across different applications, to have a consistent format. Other than that, we also saw containers as a technology that could potentially be adopted across the enterprise, not just by a small subset of applications. So that was really interesting to us.

In addition to that, containers came with schedulers like Kubernetes or Swarm, which could do a lot more than traditional schedulers, for example resource management, failure management, or scaling up and down depending on an application's or the business's requirements. So all those things were very appealing; it looked like a solution to a number of challenges that we were facing. That's when we got started with containers.
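As a rough illustration of the scheduler capabilities described above, the following sketch uses the official Kubernetes Python client to declare a Deployment with resource requests and limits (resource management), a replica count (scaling up and down), and pods that the controller automatically replaces on failure (failure management). The image name, namespace, and resource sizes are hypothetical, not taken from RBC's environment.

# A minimal sketch, assuming the official "kubernetes" Python client is installed
# and a kubeconfig is available. All names and sizes are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster

container = client.V1Container(
    name="pricing-service",                          # hypothetical workload
    image="registry.example.com/pricing:1.0",        # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "500m", "memory": "1Gi"},   # scheduler places pods based on requests
        limits={"cpu": "2", "memory": "4Gi"},        # and enforces these ceilings
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="pricing-service"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # scaling up or down is a change to this number
        selector=client.V1LabelSelector(match_labels={"app": "pricing-service"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "pricing-service"}),
            # failure management: the Deployment controller replaces pods that die
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)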

Why adopt Swarm, then later Kubernetes?

Other than resource management, there is failover management. As you can imagine, managing failover and DR is never easy. With container schedulers, we saw that it essentially becomes a managed service for us.

Another aspect: we are in a heavily regulated industry, capital markets especially. So creating an audit trail of events, who did what and when, is important, and containers seemed to provide all of those aspects to us out of the box.

The other thing that we saw with containers and the schedulers was that we could simplify our risk management; we could control which application or which container gets deployed where, how it runs, and when it runs. All those aspects of schedulers simplified, or seemed at the time to simplify, a lot of the traditional challenges, and that's what was very appealing to us.

What were the tangible benefits of moving to containers?

It's interesting: the benefits are broader, not just for developers. And the way I will answer this question is not from development to operations; let me answer it from operations to developers.

Operationally, the moment developers saw that applications could be deployed with containers relatively quickly, without them being on the call or writing a long release note, they started seeing the benefit right away: I don't need to be there late in the evening, I don't need to be on call to create the environments or to deploy to QA versus production versus DR. To them, it was: do it right once, and then repeat that success across different environments.

So that was a big eye-opener for them, and they started realizing, hey, look, I can free up my time now. I can focus on my core development, and I don't need to deal with the traditional operational issues. That was quite eye-opening for all of us, not just for developers, and we started seeing that ROI very early on. Another thing the developers talked about was, hey, I can validate this application on my laptop. I don't need all these servers, I don't need to share servers, and I don't need to depend on infrastructure teams or other teams to get their checklists done before I can start my work. I can validate on my laptop. That was another very powerful feature that empowered them.

The last thing I would say is the software-defined aspects of the technology, for example network or storage. With a lot of the traditional approaches, developers have to call someone, wait, and then deal with tickets. Now they can do a lot of these things themselves, they can define it themselves, and that's very empowering. From a DevOps perspective, it's our move toward shift-left: the more control developers have, the better the quality of the product, the faster the time to market, and the overall experience and the business benefits all start to improve.
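As an illustration of the self-service, software-defined storage described above, the sketch below declares a persistent volume claim directly from code rather than through a ticket. This is generic Kubernetes usage, not RBC's specific setup; the claim name, storage class, and size are hypothetical and will vary by cluster.

# A minimal sketch: a developer requests storage declaratively instead of filing a ticket.
# Assumes the "kubernetes" Python client and a storage class backed by a dynamic provisioner.
from kubernetes import client, config

config.load_kube_config()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "model-scratch-space"},         # hypothetical name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "standard",                  # hypothetical storage class
        "resources": {"requests": {"storage": "50Gi"}},  # hypothetical size
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)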


How did you collaborate with Mirantis to integrate a new product feature?

From the product management perspective, I would say products are always evolving, and capabilities can be at different stages of maturity. So when we reviewed what our application teams and our business were looking to do, one area that stood out was definitely the data science space. Our quantitative data scientists really wanted to expand our risk analysis models, and they were looking for larger-scale compute, a lot more computing power. We tried to come up with a way to facilitate their needs.

It really came from an early concept of being able to leverage GPUs. We set up a small R&D team to see whether this was something that would be feasible for us on our end, but based on different factors and considerations, and the technical thinking involved, we realized that the complexity it would bring to our overall technical stack was not something we should take on ourselves.

We reached out to Mirantis and brought forward the concept of being able to scale Kubernetes pods onto GPUs. We relied on their expertise and their engineers to think about expanding their Kubernetes offering to support running pods on GPUs. It definitely was not something that happened from one day to the next; it involved a number of conversations. I'm happy to say that in recent months it did become part of the Kubernetes product offering.
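The exact configuration RBC and Mirantis arrived at is not detailed here, but in standard Kubernetes a pod is placed on GPU hardware by requesting the extended resource exposed by a GPU device plugin, such as nvidia.com/gpu for NVIDIA GPUs. A minimal sketch, with a hypothetical image and namespace:

# A minimal sketch: a pod that requests one GPU through the nvidia.com/gpu extended
# resource (exposed by the NVIDIA device plugin). Image and namespace are hypothetical.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="risk-model-gpu"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="risk-model",
                image="registry.example.com/risk-model:latest",   # hypothetical image
                resources=client.V1ResourceRequirements(
                    # the scheduler will only place this pod on a node with a free GPU
                    limits={"nvidia.com/gpu": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)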

RBC Leverages Mirantis Kubernetes Engine to provide GPU computing power to quantitative data scientists

Challenge

Data scientists needed more computing power to expand risk analysis models

Solutions

RBC collaborated with Mirantis to add GPU support to the Mirantis Kubernetes Engine container platform

Results

Quantitative data scientists are able to run large, complex risk analysis models by scaling Kubernetes pods onto GPUs

RBC by the Numbers

$37 billion in revenue

17 million clients

88,000 employees

“Developers started realizing that, hey, look, I can free up my time now. I can focus on my core development, and I don’t need to deal with the traditional operational issues. That was quite eye-opening for us, and we started seeing the ROI very early on.”

— Manoj Agrawal

Head of RBC Capital Markets Compute and Data Fabric
